The introduction of relevant physical information into neural network architectures has become a widely used and successful strategy for improving their performance. In lattice gauge theories, such information can be identified with gauge symmetries, which are incorporated into the network layers of our recently proposed Lattice Gauge Equivariant Convolutional Neural Networks (L-CNNs). L-CNNs can generalize better to differently sized lattices than traditional neural networks and are by construction equivariant under lattice gauge transformations. In these proceedings, we present our progress on possible applications of L-CNNs to Wilson flow or continuous normalizing flow. Our methods are based on neural ordinary differential equations which allow us to modify link configurations in a gauge equivariant manner. For simplicity, we focus on simple toy models to test these ideas in practice.
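The gauge-equivariant flow idea above can be illustrated with a toy model that is much simpler than the paper's setup: a gradient flow of the Wilson plaquette action for a compact U(1) gauge field on a small periodic 2D lattice, integrated with explicit Euler steps (a crude stand-in for a neural ODE solver). All names and parameters here are illustrative, not taken from the paper.

```python
import numpy as np

L = 4
rng = np.random.default_rng(0)

def plaquette(theta):
    """Plaquette angle P(x) = t0(x) + t1(x+e0) - t0(x+e1) - t1(x)."""
    t0, t1 = theta
    return t0 + np.roll(t1, -1, axis=0) - np.roll(t0, -1, axis=1) - t1

def action(theta):
    """Wilson plaquette action for U(1) in angle representation."""
    return np.sum(1.0 - np.cos(plaquette(theta)))

def flow(theta, eps=0.05, steps=100):
    """Explicit Euler integration of d(theta)/dt = -dS/d(theta)."""
    theta = theta.copy()
    for _ in range(steps):
        s = np.sin(plaquette(theta))
        g0 = s - np.roll(s, 1, axis=1)   # dS/dt0(x)
        g1 = -s + np.roll(s, 1, axis=0)  # dS/dt1(x)
        theta -= eps * np.stack([g0, g1])
    return theta

def gauge_transform(theta, omega):
    """U_mu(x) -> Omega(x) U_mu(x) Omega(x+e_mu)^dagger, in angle form."""
    t0 = theta[0] + omega - np.roll(omega, -1, axis=0)
    t1 = theta[1] + omega - np.roll(omega, -1, axis=1)
    return np.stack([t0, t1])

theta = rng.uniform(-np.pi, np.pi, size=(2, L, L))
omega = rng.uniform(-np.pi, np.pi, size=(L, L))

# Gauge equivariance: flowing a gauge-transformed configuration yields the
# same gauge invariant observables as flowing the original configuration.
p_direct = plaquette(flow(theta))
p_transformed = plaquette(flow(gauge_transform(theta, omega)))
print(np.allclose(p_direct, p_transformed))
print(action(flow(theta)) < action(theta))  # the flow smooths the field
```

Because the flow drift depends on the links only through plaquettes, which are exactly gauge invariant for U(1), the update commutes with gauge transformations by construction; the non-abelian case treated in the paper requires genuinely equivariant (rather than invariant) building blocks.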
The crucial role played by the underlying symmetries of high energy physics and lattice field theory calls for the implementation of such symmetries in the neural network architectures that are applied to the physical system under consideration. In these proceedings, we focus on the consequences of incorporating translational equivariance among the network properties, particularly in terms of performance and generalization. The benefits of equivariant networks are exemplified by studying a complex scalar field theory, on which various regression and classification tasks are examined. For a meaningful comparison, promising equivariant and non-equivariant architectures are identified by means of a systematic search. The results suggest that in most of the tasks our best equivariant architectures can perform and generalize significantly better than their non-equivariant counterparts, which applies not only to physical parameters beyond those represented in the training set, but also to different lattice sizes.
In recent years, the use of machine learning has become increasingly popular in the context of lattice field theories. An essential element of such theories is represented by symmetries, whose inclusion in the neural network properties can lead to high rewards in terms of performance and generalizability. A fundamental symmetry that usually characterizes physical systems on a lattice with periodic boundary conditions is equivariance under spatial translations. Here we investigate the advantages of adopting translationally equivariant neural networks over non-equivariant ones. The system we consider is a complex scalar field with quartic interaction on a two-dimensional lattice in the flux representation, on which the networks carry out various regression and classification tasks. Promising equivariant and non-equivariant architectures are identified with a systematic search. We demonstrate that in most of these tasks our best equivariant architectures can perform and generalize significantly better than their non-equivariant counterparts, which applies not only to physical parameters beyond those represented in the training set, but also to different lattice sizes.
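The translational equivariance invoked above can be checked in a few lines: a convolution with periodic (circular) boundary conditions commutes exactly with lattice translations. The sketch below is a minimal numpy illustration of that property, not the architecture from the paper.

```python
import numpy as np

def periodic_conv2d(field, kernel):
    """Cross-correlate a 2D field with a 3x3 kernel under periodic wrapping."""
    out = np.zeros_like(field)
    for i in range(3):
        for j in range(3):
            # np.roll implements the periodic boundary conditions of the lattice.
            out += kernel[i, j] * np.roll(field, shift=(1 - i, 1 - j), axis=(0, 1))
    return out

rng = np.random.default_rng(1)
field = rng.normal(size=(8, 8))
kernel = rng.normal(size=(3, 3))

# Translate-then-convolve equals convolve-then-translate.
shifted_first = periodic_conv2d(np.roll(field, (2, 3), axis=(0, 1)), kernel)
conv_first = np.roll(periodic_conv2d(field, kernel), (2, 3), axis=(0, 1))
print(np.allclose(shifted_first, conv_first))  # True
```

In a deep-learning framework the same property is obtained with circular padding (e.g. `padding_mode="circular"` in PyTorch convolutions); breaking it, for instance with zero padding or fully connected layers, is what distinguishes the non-equivariant baselines.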
In these proceedings we present lattice gauge equivariant convolutional neural networks (L-CNNs), which are able to process data from lattice gauge theory simulations while exactly preserving gauge symmetry. We review aspects of the architecture and show how L-CNNs can represent a large class of gauge invariant and gauge equivariant functions on the lattice. We compare the performance of L-CNNs and non-equivariant networks on a non-linear regression problem and demonstrate how gauge invariance is broken for non-equivariant models.
We review a novel neural network architecture called lattice gauge equivariant convolutional neural networks (L-CNNs), which can be applied to generic machine learning problems in lattice gauge theory while exactly preserving gauge symmetry. We discuss the concept of gauge equivariance, which we use to explicitly construct gauge equivariant convolutional layers and a bilinear layer. The performance of L-CNNs and non-equivariant CNNs is compared on a seemingly simple non-linear regression task, where L-CNNs demonstrate generalizability and achieve a high degree of accuracy in their predictions compared to their non-equivariant counterparts.
We propose lattice gauge equivariant convolutional neural networks (L-CNNs) for generic machine learning applications on lattice gauge theory problems. At the heart of this network structure is a novel convolutional layer that preserves gauge equivariance while forming arbitrarily shaped Wilson loops in successive bilinear layers. Together with topological information, for example from Polyakov loops, such a network can in principle approximate any gauge covariant function on the lattice. We demonstrate that L-CNNs can learn and generalize gauge invariant quantities that traditional convolutional neural networks are incapable of finding.
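The Wilson loop mentioned above is the basic gauge covariant object such layers build: a path-ordered product of link matrices around a closed loop, whose trace is gauge invariant. The following toy check with random SU(2) links on a small periodic 2D lattice illustrates this invariance numerically; it is a sketch of the underlying lattice identity, not of the L-CNN implementation itself.

```python
import numpy as np

L = 3
rng = np.random.default_rng(2)

def random_su2():
    """Random SU(2) matrix from a uniformly sampled unit quaternion."""
    v = rng.normal(size=4)
    v /= np.linalg.norm(v)
    a, b = v[0] + 1j * v[1], v[2] + 1j * v[3]
    return np.array([[a, b], [-b.conjugate(), a.conjugate()]])

# links[mu, x, y] is the SU(2) matrix on the link from (x, y) in direction mu.
links = np.array([[[random_su2() for _ in range(L)] for _ in range(L)]
                  for _ in range(2)])

def plaquette_trace(U):
    """Re tr of the 1x1 Wilson loop U0(x) U1(x+e0) U0(x+e1)^† U1(x)^†."""
    tr = np.zeros((L, L))
    for x in range(L):
        for y in range(L):
            p = (U[0, x, y]
                 @ U[1, (x + 1) % L, y]
                 @ U[0, x, (y + 1) % L].conj().T
                 @ U[1, x, y].conj().T)
            tr[x, y] = np.trace(p).real
    return tr

def gauge_transform(U, Om):
    """U_mu(x) -> Omega(x) U_mu(x) Omega(x + e_mu)^dagger."""
    V = np.empty_like(U)
    for x in range(L):
        for y in range(L):
            V[0, x, y] = Om[x, y] @ U[0, x, y] @ Om[(x + 1) % L, y].conj().T
            V[1, x, y] = Om[x, y] @ U[1, x, y] @ Om[x, (y + 1) % L].conj().T
    return V

Om = np.array([[random_su2() for _ in range(L)] for _ in range(L)])
print(np.allclose(plaquette_trace(links),
                  plaquette_trace(gauge_transform(links, Om))))
```

Under the transformation, the loop conjugates as P(x) → Ω(x) P(x) Ω(x)†, so its trace is unchanged; the bilinear layers of the abstract grow larger loops from such plaquettes while maintaining exactly this transformation behavior.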
Coronary Computed Tomography Angiography (CCTA) provides information on the presence, extent, and severity of obstructive coronary artery disease. Large-scale clinical studies analyzing CCTA-derived metrics typically require ground-truth validation in the form of high-fidelity 3D intravascular imaging. However, manual rigid alignment of intravascular images to corresponding CCTA images is both time consuming and user-dependent. Moreover, intravascular modalities suffer from several non-rigid motion-induced distortions arising from irregularities in the imaging catheter path. To address these issues, we here present a semi-automatic segmentation-based framework for both rigid and non-rigid matching of intravascular images to CCTA images. We formulate the problem in terms of finding the optimal \emph{virtual catheter path} that samples the CCTA data to recapitulate the coronary artery morphology found in the intravascular image. We validate our co-registration framework on a cohort of $n=40$ patients using bifurcation landmarks as ground truth for longitudinal and rotational registration. Our results indicate that our non-rigid registration significantly outperforms other co-registration approaches for luminal bifurcation alignment in both longitudinal (mean mismatch: 3.3 frames) and rotational directions (mean mismatch: 28.6 degrees). By providing a differentiable framework for automatic multi-modal intravascular data fusion, our developed co-registration modules significantly reduce the manual effort required to conduct large-scale multi-modal clinical studies while also providing a solid foundation for the development of machine learning-based co-registration approaches.
The release of ChatGPT, a language model capable of generating text that appears human-like and authentic, has gained significant attention beyond the research community. We expect that the convincing performance of ChatGPT incentivizes users to apply it to a variety of downstream tasks, including prompting the model to simplify their own medical reports. To investigate this phenomenon, we conducted an exploratory case study. In a questionnaire, we asked 15 radiologists to assess the quality of radiology reports simplified by ChatGPT. Most radiologists agreed that the simplified reports were factually correct, complete, and not potentially harmful to the patient. Nevertheless, instances of incorrect statements, missed key medical findings, and potentially harmful passages were reported. While further studies are needed, the initial insights of this study indicate a great potential in using large language models like ChatGPT to improve patient-centered care in radiology and other medical domains.
Artificial Intelligence (AI) has become commonplace for solving routine everyday tasks. Because of the exponential growth in medical imaging data volume and complexity, the workload on radiologists is steadily increasing. We project that the gap between the number of imaging exams and the number of expert radiologist readers required to cover this increase will continue to expand, consequently introducing a demand for AI-based tools that improve the efficiency with which radiologists can comfortably interpret these exams. AI has been shown to improve efficiency in medical-image generation, processing, and interpretation, and a variety of such AI models have been developed across research labs worldwide. However, very few of these, if any, find their way into routine clinical use, a discrepancy that reflects the divide between AI research and successful AI translation. To address the barrier to clinical deployment, we have formed the MONAI Consortium, an open-source community which is building standards for AI deployment in healthcare institutions, and developing tools and infrastructure to facilitate their implementation. This report represents several years of weekly discussions and hands-on problem-solving experience by groups of industry experts and clinicians in the MONAI Consortium. We identify barriers between AI-model development in research labs and subsequent clinical deployment and propose solutions. Our report provides guidance on processes which take an imaging AI model from development to clinical implementation in a healthcare institution. We discuss various AI integration points in a clinical Radiology workflow. We also present a taxonomy of Radiology AI use-cases. Through this report, we intend to educate the stakeholders in healthcare and AI (AI researchers, radiologists, imaging informaticists, and regulators) about cross-disciplinary challenges and possible solutions.
The future of population-based breast cancer screening likely lies in personalized strategies based on clinically relevant risk models. Mammography-based risk models should remain robust to domain shifts caused by different populations and mammographic devices. Modern risk models do not ensure adaptation across vendor-domains and are often conflated so as to unintentionally rely on both precursors of cancer and systemic/global mammographic information associated with short- and long-term risk, respectively, which might limit performance. We developed a robust, cross-vendor model for long-term risk assessment. An augmentation-based domain adaptation technique, based on flavorization of mammographic views, ensured generalization to an unseen vendor-domain. We trained on samples without diagnosed or potentially malignant findings to learn systemic/global breast tissue features, called mammographic texture, indicative of future breast cancer. However, training in this way may cause erratic convergence. By excluding noise-inducing samples and designing a case-control dataset, a robust ensemble texture model was trained. This model was validated in two independent datasets. In 66,607 Danish women with flavorized Siemens views, the AUC was 0.71 and 0.65 for prediction of interval cancers within two years (ICs) and from two years after screening (LTCs), respectively. In combination with established risk factors, the model's AUC increased to 0.68 for LTCs. In 25,706 Dutch women with Hologic-processed views, the AUCs were not different from the AUCs in Danish women with flavorized views. The results suggest that the model robustly estimated long-term risk while adapting to an unseen processed vendor-domain. The model identified 8.1% of Danish women accounting for 20.9% of ICs and 14.2% of LTCs.